ALife for Real and Virtual Audio-Video Performances
Abstract
MAG (an Italian acronym standing for Musical Genetic Algorithms) is an electronic art piece in which a multifaceted piece of software attempts to "translate" musical expression into corresponding static or animated graphical expressions. The mechanism behind this "translation" is a rather complex and articulated algorithm that, in short, is based on artificial learning. MAG implements different learning techniques that allow artificial agents to learn about the musical flow by developing adaptive behaviour. In our specific case, the technique consists of a population of neural networks – one-dimensional artificial agents that populate a two-dimensional artificial world and are served by a simple input-output control system – that can use both genetic and reinforcement learning algorithms to evolve appropriate behavioural answers to an impressively large range of inputs, through genetic pressure based on a fitness formula and, eventually, user-machine feedback. More specifically, in the first version of the MAG algorithm the agents' control system is a perceptron; the agents' world is a two-dimensional grid whose dimensions adapt to the host screen; the most important input the agents receive (though not necessarily the only one) is the sound wave that a given music file produces at run time; and the output is the behavioural answer the agents produce by moving, and thereby drawing onto the computer screen, so it is graphical. The combination of artificial evolution with the flow of a repeated song, or of different musical tunes, allows the software to establish a special relationship between the sound waves and the aesthetics of the resulting graphics. Further, we have started to explore the run-time creation of both music and graphics. Recently, we developed software that allows any user to create new versions of popular songs with the MusicTiles app simply by connecting musical building blocks. This creation of musical expression can happen as a performance (i.e. at run time). By connecting the MusicTiles app to the MAG software, we make it possible to blend musical and graphical expression in parallel and at run time, thus creating an audio-video performance that is always unique.
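To make the described architecture concrete, the following is a minimal Python sketch of the kind of system the abstract outlines: perceptron-controlled agents on a two-dimensional grid, driven by audio features and evolved by a fitness-based genetic algorithm to which user feedback can be added. It is not the MAG source code; the grid size, the number of audio features, the fitness definition and all identifiers are assumptions made purely for illustration.

    # Illustrative sketch (not the MAG implementation): perceptron agents that
    # move on a 2D grid in response to audio features, evolved with a simple
    # genetic algorithm whose fitness can be biased by user feedback.
    import random
    import math

    GRID_W, GRID_H = 128, 96   # assumed grid size; MAG adapts it to the host screen
    N_INPUTS = 4               # assumed number of audio features per step
    N_OUTPUTS = 4              # movement choices: up, down, left, right

    class PerceptronAgent:
        """One agent: a single-layer perceptron mapping audio features to a move."""
        def __init__(self, weights=None):
            self.weights = weights or [
                [random.uniform(-1, 1) for _ in range(N_INPUTS + 1)]  # +1 bias weight
                for _ in range(N_OUTPUTS)
            ]
            self.x = random.randrange(GRID_W)
            self.y = random.randrange(GRID_H)
            self.trail = []    # cells visited: the agent's "drawing"

        def act(self, features):
            """Pick the move whose output unit is most activated by the features."""
            activations = [
                sum(w * f for w, f in zip(row, features + [1.0]))  # bias input = 1
                for row in self.weights
            ]
            move = activations.index(max(activations))
            dx, dy = [(0, -1), (0, 1), (-1, 0), (1, 0)][move]
            self.x = (self.x + dx) % GRID_W
            self.y = (self.y + dy) % GRID_H
            self.trail.append((self.x, self.y))

    def fitness(agent, user_feedback=0.0):
        """Assumed fitness: reward grid coverage, plus optional human feedback."""
        return len(set(agent.trail)) + user_feedback

    def evolve(population, feedback, mutation_rate=0.05):
        """One generation: rank by fitness, keep the best half, mutate copies of it."""
        ranked = sorted(population,
                        key=lambda a: fitness(a, feedback.get(a, 0.0)),
                        reverse=True)
        survivors = ranked[: len(ranked) // 2]
        next_gen = [PerceptronAgent([row[:] for row in p.weights]) for p in survivors]
        for parent in survivors:
            child_w = [[w + random.gauss(0, 1) * mutation_rate for w in row]
                       for row in parent.weights]
            next_gen.append(PerceptronAgent(child_w))
        return next_gen

    def audio_features(t):
        """Stand-in for run-time audio analysis: a few synthetic band amplitudes."""
        return [abs(math.sin(t * f)) for f in (0.7, 1.3, 2.1, 3.4)]

    if __name__ == "__main__":
        population = [PerceptronAgent() for _ in range(20)]
        for generation in range(10):
            for t in range(500):                       # one "song" segment per generation
                feats = audio_features(t * 0.01)
                for agent in population:
                    agent.act(feats)
            population = evolve(population, feedback={})  # no user feedback in this demo
        print("best coverage:", max(len(set(a.trail)) for a in population))

In the piece itself the input features would come from run-time analysis of the music file and the agents' trails would be rendered to the screen; in this sketch both are replaced by stand-ins so that the example remains self-contained and runnable.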
Similar works
IS1-2 ALife for Real and Virtual Audio-Video Performances
Several sub-disciplines of engineering are driven by the researchers' aim of providing positive change to society through their engineering. Based on two decades of research in developing engineering systems with a societal impact (e.g. in robotics, embodied AI, and playware), in this paper we suggest a cyclic research method based on a mix between participatory and experimental processes. In ...
Thirdspace: The Trialectics of the Real, Virtual and Blended Spaces
This article aims to redefine the concept of Thirdspace and to establish a trilateral relationship among the three concepts of real space, virtual space and the user. To do so, not only does the concept of Thirdspace have to be redefined, but a new understanding of virtual space as a relatively independent space is also required. This three-sided relation requires a new understanding of the relationship be...
RT-MediaCycle: Towards a Real-Time Use of MediaCycle in Performances and Video Installations
This project aims at developing tools to extend the use of the MediaCycle software to live performances and installations. In particular, this requires real-time recording and analysis of media streams (e.g., video and audio). These tools are developed as patches for the Puredata (Pd) software [10] which is widely used by the artistic community. This project is a collaboration with the Belgian ...
Virtual Coaches over Mobile Video
We hypothesize that the context of a smartphone, how a virtual human is presented within a smartphone app, and indeed, the nature of that app, can profoundly affect how the virtual human is perceived by a real human. We believe that virtual humans, presented over video chat services (such as Skype) and delivered using mobile phones, can be an effective way to deliver coaching applications. We p...
Sound Analyser: A Plug-In for Real-Time Audio Analysis in Live Performances and Installations
Real-time audio analysis has great potential for being used to create musically responsive applications in live performances. There have been many examples of such use, including sound-responsive visualisations, adaptive audio effects and machine musicianship. However, at present, using audio analysis algorithms in live performance requires either some detailed knowledge about the algorithms th...